
    Psychophysics, Gestalts and Games

    Many psychophysical studies are dedicated to the evaluation of human gestalt detection in dot or Gabor patterns, and to modeling its dependence on the pattern and background parameters. Nevertheless, even for these constrained percepts, psychophysics has not yet reached the challenging prediction stage, where human detection would be quantitatively predicted by a (generic) model. Computer vision, on the other hand, has attempted to define automatic detection thresholds. This chapter sketches a procedure, inspired by gestaltism, for confronting these two methodologies. Using a computational quantitative version of the non-accidentalness principle, we raise the possibility that the psychophysical and the (older) gestaltist setups, both applicable to dot or Gabor patterns, find a useful complement in a Turing test. In our perceptual Turing test, the scientist compares human performance with the detection result given by a computer. This confrontation makes it possible to revive the abandoned method of gestaltic games. We sketch the elaboration of such a game, in which the subjects of the experiment are confronted with an alignment detection algorithm and invited to draw examples that will fool it. We show that in this way a more precise definition of the alignment gestalt and of its computational formulation seems to emerge. Detection algorithms may also be relevant to more classic psychophysical setups, where they can again play the role of a Turing test. For a visual experiment in which subjects were asked to detect alignments in Gabor patterns, we associated a single function measuring alignment detectability, expressed as a number of false alarms (NFA). The first results indicate that the values of the NFA, as a function of all simulation parameters, are highly correlated with human detection. This finding, which we intend to support with further experiments, may end up confirming that human alignment detection is the result of a single mechanism.
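    The abstract does not spell out the NFA formula, but in the a contrario framework it builds on (Desolneux, Moisan, and Morel), the NFA of a candidate alignment is typically the number of tests multiplied by the binomial tail probability of the observed agreement under a background noise model, and a detection is declared when the NFA falls below a threshold, conventionally 1. A minimal sketch of that idea, with illustrative function and parameter names:

```python
from math import comb

def binomial_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of
    n background elements agree with the candidate alignment by accident."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Number of false alarms: expected count, over all n_tests candidate
    alignments, of events at least as extreme as the observed one under
    the background (noise-only) model."""
    return n_tests * binomial_tail(n, k, p)

def is_detected(n_tests, n, k, p, epsilon=1.0):
    """Non-accidentalness test: detect when the event is too unlikely
    to arise by chance (NFA below the conventional threshold of 1)."""
    return nfa(n_tests, n, k, p) < epsilon
```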

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal development underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration, following multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. While larger differences in suppression between audiovisual and audio-only stimuli under high compared to low noise were found for emotional stimuli, no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests that integration is modulated by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
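    The abstract does not detail the analysis pipeline; the sketch below shows one common way beta-band (15–25 Hz) suppression in the reported 200–400 ms window could be quantified on a single-channel trial, using a band-pass filter and a Hilbert envelope. The function name and the 500 ms baseline window are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_suppression(eeg, fs, stim_onset_s, band=(15.0, 25.0)):
    """Relative change in beta-band power in the 200-400 ms post-stimulus
    window versus a pre-stimulus baseline (negative value = suppression).
    eeg: 1-D single-channel trial (needs >= 0.5 s of pre-stimulus data);
    fs: sampling rate in Hz; stim_onset_s: stimulus onset in seconds."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, eeg)))  # instantaneous amplitude
    power = envelope ** 2
    onset = int(stim_onset_s * fs)
    baseline = power[onset - int(0.5 * fs):onset].mean()  # -500..0 ms
    window = power[onset + int(0.2 * fs):onset + int(0.4 * fs)].mean()  # 200..400 ms
    return (window - baseline) / baseline
```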

    An on-line task for contrasting auditory processing in the verbal and nonverbal domains and norms for younger and older adults

    Contrasting linguistic and nonlinguistic processing has been of interest to many researchers with different scientific, theoretical, or clinical questions. However, previous work on this type of comparative analysis and experimentation has been limited. In particular, little is known about the differences and similarities between the perceptual, cognitive, and neural processing of nonverbal environmental sounds and that of speech sounds. With the aim of contrasting verbal and nonverbal processing in the auditory modality, we developed a new on-line measure that can be administered to subjects from different clinical, neurological, or sociocultural groups. This is an on-line sound-to-picture matching task, in which the sounds are either environmental sounds or their linguistic equivalents, and which is controlled for potential task and item confounds across the two sound types. Here, we describe the design and development of our measure and report norming data for healthy subjects from two adult age groups: younger adults (18–24 years of age) and older adults (54–78 years of age). We also outline other populations to which the test has been or is being administered. Beyond the results reported here, the test can be useful to other researchers interested in systematically contrasting verbal and nonverbal auditory processing in other populations.
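    To make the design concrete: controlling task and item confounds implies that each item exists in a matched pair, an environmental-sound version and a spoken-word version sharing the same picture set. A minimal data-structure sketch of one such paired item; all names and fields here are hypothetical illustrations, not the published test's implementation:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trial:
    """One sound-to-picture matching trial. The two versions of an item
    share the same pictures and target, so task and item factors are
    matched across the verbal and nonverbal conditions."""
    item: str            # e.g. "dog"
    sound_type: str      # "environmental" or "verbal"
    sound_file: str      # a barking clip, or the spoken word "dog"
    pictures: List[str]  # target + distractors, identical in both versions
    target_index: int    # position of the correct picture

def matched_pair(item, env_clip, word_clip, pictures, target_index) -> Tuple[Trial, Trial]:
    """Build the environmental and verbal versions of one item."""
    return (
        Trial(item, "environmental", env_clip, pictures, target_index),
        Trial(item, "verbal", word_clip, pictures, target_index),
    )
```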

    Voxel-based lesion–symptom mapping

    For more than a century, lesion–symptom mapping studies have yielded valuable insights into the relationships between brain and behavior, but newer imaging techniques have surpassed lesion analysis in examining functional networks. Here we used a new method, voxel-based lesion–symptom mapping (VLSM), to analyze the relationship between tissue damage and behavior on a voxel-by-voxel basis, as in functional neuroimaging. We applied VLSM to measures of speech fluency and language comprehension in 101 left-hemisphere-damaged aphasic patients. The VLSM maps for these measures confirm the anticipated contrast between anterior and posterior areas, and they also indicate that interacting regions facilitate fluency and auditory comprehension, in agreement with findings from modern brain imaging.
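    The abstract describes VLSM as a voxel-by-voxel statistical test relating lesion status to behavior. A minimal sketch of that idea, assuming binary lesion masks and one continuous behavioral score per patient; the function name and minimum group size are illustrative, and the published method additionally involves correction for multiple comparisons:

```python
import numpy as np
from scipy.stats import ttest_ind

def vlsm(lesion_maps, scores, min_group=5):
    """At each voxel, compare the behavioral scores of patients with
    versus without a lesion at that voxel.
    lesion_maps: (n_patients, n_voxels) binary array; scores: (n_patients,).
    Returns one t-value per voxel (NaN where either group is too small)."""
    n_voxels = lesion_maps.shape[1]
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = scores[lesion_maps[:, v] == 1]
        intact = scores[lesion_maps[:, v] == 0]
        if len(lesioned) >= min_group and len(intact) >= min_group:
            # Positive t: intact patients outperform lesioned patients,
            # implicating this voxel in the measured behavior.
            t_map[v], _ = ttest_ind(intact, lesioned)
    return t_map
```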